Ho-Kashyap classifier with early stopping for regularization

Authors

  • Fabien Lauer
  • Gérard Bloch
Abstract

This paper focuses on linear classification using a fast and simple algorithm known as the Ho–Kashyap learning rule (HK). To avoid overfitting, and instead of adding a regularization parameter to the criterion, early stopping is introduced as a regularization method for HK learning, which then becomes HKES (Ho–Kashyap with Early Stopping). Furthermore, an automatic procedure, based on generalization error estimation, is proposed to tune the stopping time. The method is then tested and compared to others (including SVM and LSVM) that use either the l1- or the l2-norm of the errors, on well-known benchmarks. The results show the limits of early stopping for regularization with respect to generalization error estimation and the drawbacks of low-level hyperparameters such as the number of iterations.
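For concreteness, the following is a minimal sketch of the Ho–Kashyap rule with a fixed iteration budget playing the role of the stopping time. The function name, the toy data, and the value of rho are illustrative assumptions, and the automatic tuning of the stopping time from a generalization error estimate described above (e.g. by cross-validating over n_iter) is not shown.

    import numpy as np

    def hk_early_stopping(X, y, n_iter=20, rho=0.5):
        """Ho-Kashyap learning rule stopped after n_iter updates (sketch).

        X: (n, d) samples; y: labels in {-1, +1}. Here n_iter is the
        regularization hyperparameter that HKES would tune from an
        estimate of the generalization error (not implemented here).
        """
        n = X.shape[0]
        # Absorb labels and bias: row i of Y is y_i * [x_i, 1].
        Y = y[:, None] * np.hstack([X, np.ones((n, 1))])
        Y_pinv = np.linalg.pinv(Y)         # pseudo-inverse, computed once
        b = np.ones(n)                      # target margins, kept positive
        a = Y_pinv @ b                      # least-squares solution of Y a = b
        for _ in range(n_iter):             # early stopping: bounded iterations
            e = Y @ a - b                   # residual of the margin equations
            b = b + rho * (e + np.abs(e))   # increase b only where e > 0
            a = Y_pinv @ b                  # re-solve the least-squares problem
        return a                            # a[:-1] are the weights, a[-1] the bias

    # Illustrative use on toy data:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    a = hk_early_stopping(X, y, n_iter=10)
    y_pred = np.sign(np.hstack([X, np.ones((100, 1))]) @ a)
    print("training accuracy:", np.mean(y_pred == y))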

Similar Articles

Ho–Kashyap with Early Stopping vs Soft Margin SVM for Linear Classifiers – An Application

In a classification problem, hard margin SVMs tend to minimize the generalization error by maximizing the margin. Regularization is obtained with soft margin SVMs, which improve performance by relaxing the constraints on the margin maximization. This article shows that comparable performance can be obtained in the linearly separable case with the Ho–Kashyap learning rule associated with early st...

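As a rough counterpart to the soft margin SVM baseline mentioned above, a linear SVM with slack variables can be trained with an off-the-shelf solver. The snippet below is a sketch that assumes scikit-learn is available, with toy data and C=1.0 chosen purely for illustration.

    import numpy as np
    from sklearn.svm import LinearSVC   # linear soft margin SVM

    # Illustrative toy data; the papers use standard benchmarks instead.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] - X[:, 1] > 0, 1, -1)

    # C trades margin maximization against the slack that relaxes the
    # margin constraints; a smaller C means stronger regularization.
    svm = LinearSVC(C=1.0)
    svm.fit(X, y)
    print("training accuracy:", svm.score(X, y))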

Regularization by Early Stopping in Single Layer Perceptron Training

Adaptive training of the non-linear single-layer perceptron can lead to the Euclidean distance classifier and later to the standard Fisher linear discriminant function. On the way between these two classifiers one obtains a regularized discriminant analysis. This is equivalent to adding a "weight decay" regularization term to the cost function. Thus early stopping plays the role of regularization ...

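As a sketch of the mechanism described in that abstract, the snippet below trains a sigmoidal single-layer perceptron by plain gradient descent and regularizes only by capping the number of epochs. The logistic loss, learning rate, and epoch budget are illustrative assumptions and do not reproduce the exact setting of that paper.

    import numpy as np

    def train_single_layer_perceptron(X, y, epochs=50, lr=0.1):
        """Gradient descent on a sigmoidal single-layer perceptron,
        regularized only by stopping after `epochs` passes (sketch)."""
        n, d = X.shape
        Xb = np.hstack([X, np.ones((n, 1))])    # append a bias input
        w = np.zeros(d + 1)                      # start from the origin
        t = (y + 1) / 2                          # map {-1, +1} labels to {0, 1}
        for _ in range(epochs):                  # early stopping = epoch budget
            p = 1.0 / (1.0 + np.exp(-Xb @ w))    # sigmoid outputs
            w -= lr * (Xb.T @ (p - t)) / n       # gradient step on the logistic loss
        return w

    # Fewer epochs keep w closer to the origin, mimicking weight decay;
    # more epochs let the solution approach the unregularized discriminant.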

Matrix-pattern-oriented Ho-Kashyap classifier with regularization learning

Existing classifier designs are generally based on vector patterns; hence, when a non-vector pattern such as a face image is the input to the classifier, it first has to be concatenated into a vector. In this paper, we instead explore using a set of given matrix patterns to design a classifier. To do so, we first represent a pattern in matrix form and recast existing vector-based classifiers to their...


Explaining the Success of AdaBoost and Random Forests as Interpolating Classifiers

There is a large literature explaining why AdaBoost is a successful classifier. The literature on AdaBoost focuses on classifier margins and boosting's interpretation as the optimization of an exponential likelihood function. These existing explanations, however, have been pointed out to be incomplete. A random forest is another popular ensemble method for which there is substantially less expl...



Journal title:
  • Pattern Recognition Letters

Volume 27, Issue -

Pages -

Publication date: 2006